Purpose: Tracking the 3D motion of the surgical tool and the patient anatomy is a fundamental requirement for computer-assisted skull-base surgery. The estimated motion can be used both for intra-operative guidance and for downstream skill analysis. Recovering such motion solely from surgical videos is desirable, as it is compliant with current clinical workflows and instrumentation. Methods: We present Tracker of Anatomy and Tool (TAToo). TAToo jointly tracks the rigid 3D motion of the patient skull and the surgical drill from stereo microscopic videos. TAToo estimates motion via an iterative optimization process in an end-to-end differentiable form. For robust tracking performance, TAToo adopts a probabilistic formulation and enforces geometric constraints at the object level. Results: We validate TAToo on both simulation data, where ground-truth motion is available, and anthropomorphic phantom data, where optical tracking provides a strong baseline. We report sub-millimeter and millimeter inter-frame tracking accuracy for the skull and the drill, respectively, with rotation errors below 1°. We further illustrate how TAToo may be used in a surgical navigation setting. Conclusion: We present TAToo, which simultaneously tracks the surgical tool and the patient anatomy in skull-base surgery. TAToo directly predicts the motion from surgical videos, without the need for any markers. Our results show that the performance of TAToo compares favorably to competing approaches. Future work will include fine-tuning of our depth network to reach the 1 mm clinical accuracy goal desired for surgical applications in the skull base.
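As a hedged illustration of the rigid-motion estimation that object-level tracking builds on (this is the classical Kabsch/Procrustes solution, not TAToo's learned, probabilistic pipeline), an SE(3) transform can be recovered in closed form from 3D point correspondences:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid (SE(3)) transform mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding 3D points.
    Returns rotation R (3x3) and translation t (3,) such that
    dst ~= src @ R.T + t. Classical Kabsch solution; illustrative only.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In practice such a closed-form step would only serve as a building block; TAToo instead refines the motion iteratively inside an end-to-end differentiable optimization.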
In human-robot collaboration, robot errors are inevitable, degrading user trust, willingness to collaborate, and task performance. Prior work has shown that people naturally exhibit social responses to robot errors, and that in social interactions these human reactions can be used to detect errors. However, this has been little explored in non-social, physical human-robot collaboration domains such as assembly and tool retrieval. In this work, we investigate how people's organic social responses to robot errors can be used for timely, automatic error detection in physical human-robot interaction. We conducted a data collection study to obtain facial responses for training a real-time detection algorithm, and case studies to explore the generalizability of our approach across different task settings and errors. Our results show that natural social responses are an effective signal for the timely detection and localization of robot errors, even in non-social contexts, and that our approach is robust across a variety of task contexts, robot errors, and user responses. This work contributes toward robust error detection without detailed task specifications.
Artificial intelligence (AI) can now automatically interpret medical images for clinical use. However, AI's potential in interventional imaging (as opposed to images used for classification or diagnosis), for example for guidance during surgery, remains largely untapped. This is because surgical AI systems are currently trained using post hoc analysis of data collected during live surgery, which has fundamental and practical limitations, including ethical considerations, expense, scalability, data integrity, and a lack of ground truth. Here, we demonstrate that creating realistic simulated images from human models is a viable alternative to, and can complement, large-scale in situ data collection. We show that training AI image analysis models on realistic synthetic data, combined with contemporary domain generalization or adaptation techniques, yields models that perform on real data comparably to models trained on precisely matched real-data training sets. Because synthetic generation of training data from human-based models scales easily, we find that our model transfer paradigm for X-ray image analysis, which we call SyntheX, can even outperform real-data-trained models, owing to the effectiveness of training on larger datasets. We demonstrate the potential of SyntheX on three clinical tasks: hip image analysis, surgical robotic tool detection, and COVID-19 lung lesion segmentation. SyntheX provides an opportunity to drastically accelerate the conception, design, and evaluation of intelligent X-ray-based systems. Moreover, simulated image environments provide the opportunity to test novel instrumentation, design complementary surgical approaches, and envision novel techniques that improve outcomes, save time, or mitigate human error, free from the ethical and practical considerations of live human data collection.
The demand for competent robot-assisted surgeons is progressively expanding, as robot-assisted surgery has become increasingly popular due to its clinical advantages. To meet this demand and provide better surgical education, we develop a novel robotic surgical education system by integrating an artificial intelligence surgical module with augmented reality visualization. The AI module employs reinforcement learning to learn from expert demonstrations and then generates 3D guidance trajectories, providing surgical context awareness of the complete procedure. The trajectory information is further visualized in the stereo viewer of the dVRK, along with additional information such as text hints, so that the user can perceive the 3D guidance and learn the procedure. The proposed system is evaluated on the surgical education task of peg transfer through preliminary trials, demonstrating its feasibility and its potential as a next-generation robot-assisted surgical education solution.
Temporally consistent depth estimation is crucial for real-time applications such as augmented reality. While stereo depth estimation has received substantial attention, leading to per-frame improvements, relatively little work has focused on temporal consistency across frames. Indeed, based on our analysis, current stereo depth estimation techniques still suffer from poor temporal consistency. Stabilizing depth in dynamic scenes is challenging due to concurrent object and camera motion. In an online setting, this problem is further aggravated because only past frames are available. In this paper, we present a technique for producing temporally consistent depth estimates in dynamic scenes in an online setting. Our network augments current per-frame stereo networks with novel motion and fusion networks. The motion network accounts for object and camera motion by predicting a per-pixel SE(3) transformation. The fusion network improves prediction consistency by aggregating the current and previous predictions with regressed weights. We conduct extensive experiments across diverse datasets (synthetic, outdoor, indoor, and medical). In both zero-shot generalization and domain fine-tuning, we demonstrate that our proposed approach outperforms competing methods in temporal stability and per-frame accuracy, both quantitatively and qualitatively. Our code will be available online.
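The fusion step can be sketched as a per-pixel convex combination of the current estimate and the motion-compensated previous estimate, with a regressed weight map. This is a minimal illustration with hypothetical names; in the paper, a fusion network learns these weights:

```python
import numpy as np

def fuse_depth(depth_cur, depth_prev_warped, weight):
    """Per-pixel convex combination of two depth maps.

    depth_cur:         current frame's depth prediction, shape (H, W).
    depth_prev_warped: previous prediction warped into the current
                       frame by the motion network, shape (H, W).
    weight:            regressed fusion weights in [0, 1], shape (H, W);
                       1.0 trusts the current frame entirely.
    """
    weight = np.clip(weight, 0.0, 1.0)
    return weight * depth_cur + (1.0 - weight) * depth_prev_warped
```

Intuitively, the network would regress weights near 1 where the warped previous prediction is unreliable (occlusions, fast-moving objects) and lower weights in static regions, which stabilizes depth over time.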
Surgical simulators not only allow planning and training of complex procedures, but also offer the ability to generate structured data for algorithm development, which can be applied to image-guided computer-assisted interventions. While there have been efforts to develop either training platforms for surgeons or data generation engines, to the best of our knowledge these two capabilities have not been offered together. We present our development of a cost-effective and synergistic framework, named Asynchronous Multibody Framework Plus (AMBF+), which generates data for downstream algorithm development while users simultaneously practice their surgical skills. AMBF+ provides stereoscopic display on a virtual reality (VR) device and haptic feedback for immersive surgical simulation. It can also generate diverse data, such as object poses and segmentation maps. AMBF+ is designed with a flexible plugin setup that allows simulation of different surgical procedures. We show one use case of AMBF+ as a virtual drilling simulator for lateral skull-base surgery, in which users can actively modify the patient anatomy with a virtual surgical drill. We further demonstrate how the generated data can be used for validating and training downstream computer vision algorithms.
Robot-assisted surgery is now well established in clinical practice and has become the gold-standard clinical treatment option for several clinical indications. The field of robot-assisted surgery is expected to grow substantially over the next decade, with a range of new robotic devices emerging to address unmet clinical needs across different specialties. A vibrant surgical robotics research community is pivotal for conceptualizing such new systems, as well as for developing and training the engineers and scientists who will translate them into practice. The da Vinci Research Kit (dVRK), an academia-industry collaborative effort to repurpose decommissioned da Vinci surgical systems (Intuitive Surgical Inc., USA) as a research platform for surgical robotics research, has been a key initiative in lowering the barrier to entry for new research groups in surgical robotics. In this paper, we present an extensive review of the publications enabled by the dVRK over the past decade. We classify the research efforts into different categories and outline some of the major challenges and needs of the robotics community in order to sustain this initiative and build upon it.
We present temporally layered architecture (TLA), a biologically inspired system for temporally adaptive distributed control. TLA layers a fast and a slow controller together to achieve temporal abstraction that allows each layer to focus on a different time-scale. Our design is biologically inspired and draws on the architecture of the human brain, which executes actions at different timescales depending on the environment's demands. Such distributed control design is widespread across biological systems because it increases survivability and accuracy in certain and uncertain environments. We demonstrate that TLA can provide many advantages over existing approaches, including persistent exploration, adaptive control, explainable temporal behavior, compute efficiency, and distributed control. We present two different algorithms for training TLA: (a) closed-loop control, where the fast controller is trained over a pre-trained slow controller, allowing better exploration for the fast controller and closed-loop control in which the fast controller decides whether to "act-or-not" at each timestep; and (b) partially open-loop control, where the slow controller is trained over a pre-trained fast controller, allowing for open-loop control in which the slow controller picks a temporally extended action or defers the next n actions to the fast controller. We evaluate our method on a suite of continuous control tasks and demonstrate the advantages of TLA over several strong baselines.
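The closed-loop variant can be sketched as a gating loop in which a slow controller proposes a temporally extended action and a fast controller decides at every timestep whether to override it. This is a minimal illustration with hypothetical controller and environment interfaces, not the paper's trained networks:

```python
def run_tla_episode(env, slow_policy, fast_policy, slow_period, max_steps):
    """Two-timescale ("act-or-not") control loop.

    slow_policy(obs) -> action
        queried only every `slow_period` steps (temporal abstraction).
    fast_policy(obs, slow_action) -> (act, action)
        queried every step; if `act` is True the fast controller
        overrides, otherwise the slow action persists.
    env is assumed to expose reset() -> obs and
    step(action) -> (obs, reward, done).
    """
    obs = env.reset()
    slow_action = None
    total_reward = 0.0
    for step in range(max_steps):
        if step % slow_period == 0:
            slow_action = slow_policy(obs)  # refresh extended action
        act, fast_action = fast_policy(obs, slow_action)
        action = fast_action if act else slow_action
        obs, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```

Because the fast controller only intervenes when needed, most steps reuse the slow controller's cached action, which is the source of the compute-efficiency advantage claimed above.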
The xView2 competition and xBD dataset spurred significant advancements in overhead building damage detection, but the competition's pixel-level scoring can lead to reduced solution performance in areas with tight clusters of buildings or uninformative context. We seek to advance automatic building damage assessment for disaster relief by proposing an auxiliary challenge to the original xView2 competition. This new challenge involves a new dataset and metrics indicating solution performance when damage is more local and limited than in xBD. Our challenge measures a network's ability to identify individual buildings and their damage level without excessive reliance on the buildings' surroundings. Methods that succeed on this challenge will provide more fine-grained, precise damage information than original xView2 solutions. The best-performing xView2 networks' performance dropped noticeably on our new limited/local damage detection task. The common causes of failure observed are that (1) building objects and their classifications are not separated well, and (2) when they are, the classification is strongly biased by surrounding buildings and other damage context. Thus, we release our augmented version of the dataset with additional object-level scoring metrics at https://gitlab.kitware.com/dennis.melamed/xfbd to test independence and separability of building objects, alongside the pixel-level performance metrics of the original competition. We also experiment with new baseline models which improve independence and separability of building damage predictions. Our results indicate that building damage detection is not a fully solved problem, and we invite others to use and build on our dataset augmentations and metrics.
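Object-level scoring, as opposed to pixel-level scoring, can be sketched as instance matching: for each ground-truth building, find the best-overlapping predicted instance by IoU and count it correct only if both localization and damage class agree. This is a simplified, hypothetical illustration; the metrics released at the linked repository define the authoritative protocol:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def object_level_accuracy(gt, pred, iou_thresh=0.5):
    """Fraction of ground-truth buildings matched and classified correctly.

    gt, pred: lists of (mask, damage_class) pairs, one per building.
    A ground-truth building counts as correct only if its best-IoU
    predicted instance overlaps by at least iou_thresh AND carries
    the same damage class.
    """
    correct = 0
    for gt_mask, gt_cls in gt:
        best = max(pred, key=lambda p: iou(gt_mask, p[0]), default=None)
        if (best is not None
                and iou(gt_mask, best[0]) >= iou_thresh
                and best[1] == gt_cls):
            correct += 1
    return correct / len(gt) if gt else 0.0
```

A metric of this shape penalizes exactly the two failure modes noted above: merged building instances fail the matching step, and context-biased class labels fail the class-agreement step.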
Knowledge of the symmetries of reinforcement learning (RL) systems can be used to create compressed and semantically meaningful representations of a low-level state space. We present a method of automatically detecting RL symmetries directly from raw trajectory data without requiring active control of the system. Our method generates candidate symmetries and trains a recurrent neural network (RNN) to discriminate between the original trajectories and the transformed trajectories for each candidate symmetry. The RNN discriminator's accuracy for each candidate reveals how symmetric the system is under that transformation. This information can be used to create high-level representations that are invariant to all symmetries on a dataset level and to communicate properties of the RL behavior to users. We show in experiments on two simulated RL use cases (a pusher robot and a UAV flying in wind) that our method can determine the symmetries underlying both the environment physics and the trained RL policy.
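A minimal sketch of the discrimination setup (trajectory shapes and the transformation interface are assumptions; the RNN discriminator itself is omitted): apply a candidate symmetry transformation to recorded trajectories and label originals 0 and transformed copies 1, so that a discriminator's accuracy on the resulting set indicates how symmetric the system is.

```python
import numpy as np

def make_discrimination_set(trajectories, transform):
    """Build a labeled dataset for one candidate symmetry.

    trajectories: list of (T, d) state arrays from raw rollouts.
    transform:    candidate symmetry, mapping a (T, d) array to (T, d).
    Returns (X, y): originals get label 0, transformed copies label 1.
    If the system is symmetric under `transform`, the two classes are
    statistically indistinguishable and a discriminator stays near 50%
    accuracy; high accuracy reveals a broken symmetry.
    """
    X, y = [], []
    for traj in trajectories:
        X.append(traj)
        y.append(0)
        X.append(transform(traj))
        y.append(1)
    return np.stack(X), np.array(y)
```

In the full method, an RNN discriminator is trained on such a set for each candidate transformation, and its held-out accuracy serves as the per-candidate symmetry score.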